Distributed optimization over time-varying directed graphs
We consider distributed optimization by a collection of nodes, each having
access to its own convex function, whose collective goal is to minimize the sum
of the functions. The communications between nodes are described by a
time-varying sequence of directed graphs, which is uniformly strongly
connected. For such communications, assuming that every node knows its
out-degree, we develop a broadcast-based algorithm, termed the
subgradient-push, which steers every node to an optimal value under a standard
assumption of subgradient boundedness. The subgradient-push requires no
knowledge of either the number of agents or the graph sequence to implement.
Our analysis shows that the subgradient-push algorithm converges at a rate of
O((ln t)/√t), where the constant depends on the initial values at the
nodes, the subgradient norms, and, more interestingly, on both the consensus
speed and the imbalances of influence among the nodes.
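A minimal sketch of the push-sum/subgradient mechanism described above, on a toy quadratic problem. The graph sequence (two alternating directed rings), self-loops, step sizes, and objective are illustrative assumptions, not taken from the paper; each node only needs its own out-degree, as the abstract states.

```python
# Toy subgradient-push: minimize sum_i (x - a_i)^2 over a time-varying
# directed graph sequence; the minimizer is mean(a).
def subgradient_push(a, num_iters=1000):
    n = len(a)
    # Two directed rings whose alternation is uniformly strongly connected.
    ring_fwd = [(i, (i + 1) % n) for i in range(n)]
    ring_bwd = [(i, (i - 1) % n) for i in range(n)]
    graphs = [ring_fwd, ring_bwd]

    x = list(a)      # primal variables
    y = [1.0] * n    # push-sum weights
    z = list(a)      # de-biased estimates x / y
    for t in range(num_iters):
        edges = graphs[t % len(graphs)]
        out_deg = [2] * n                  # self-loop + one ring edge
        in_nbrs = [[i] for i in range(n)]  # every node "sends" to itself
        for (u, v) in edges:
            in_nbrs[v].append(u)
        # each node j broadcasts x_j / d_j and y_j / d_j to its out-neighbors
        w = [sum(x[j] / out_deg[j] for j in in_nbrs[i]) for i in range(n)]
        y = [sum(y[j] / out_deg[j] for j in in_nbrs[i]) for i in range(n)]
        z = [w[i] / y[i] for i in range(n)]
        step = 0.5 / (t + 1)
        # (sub)gradient of f_i(x) = (x - a_i)^2 evaluated at z_i
        x = [w[i] - step * 2.0 * (z[i] - a[i]) for i in range(n)]
    return z

z = subgradient_push([1.0, 2.0, 6.0])
# every node's estimate z_i approaches mean(a) = 3.0
```

The division by the push-sum weight y_i is what corrects the "imbalances of influence" that directed, possibly non-doubly-stochastic communication introduces.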
Cloud-Based Centralized/Decentralized Multi-Agent Optimization with Communication Delays
We present and analyze a computational hybrid architecture for performing
multi-agent optimization. The optimization problems under consideration have
convex objective and constraint functions with mild smoothness conditions
imposed on them. For such problems, we provide a primal-dual algorithm
implemented in the hybrid architecture, which consists of a decentralized
network of agents into which centralized information is occasionally injected,
and we establish its convergence properties. To accomplish this, a central
cloud computer aggregates global information, carries out computations of the
dual variables based on this information, and then distributes the updated dual
variables to the agents. The agents update their (primal) state variables and
also communicate among themselves with each agent sharing and receiving state
information with some number of its neighbors. Throughout, communications with
the cloud are not assumed to be synchronous or instantaneous, and communication
delays are explicitly accounted for in the modeling and analysis of the system.
Experimental results are presented to support the theoretical developments
made.
Comment: 8 pages, 4 figures.
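A hedged sketch of the cloud-coordinated primal-dual idea: agents take primal gradient steps on a Lagrangian every iteration, while a central cloud only occasionally (every K iterations here, standing in for communication delay) aggregates global information and pushes an updated dual variable back. The toy problem, step sizes, and update period are illustrative assumptions, not the paper's architecture.

```python
def cloud_primal_dual(num_iters=2000, K=10, eta=0.1, rho=0.5):
    # Toy problem: minimize x1^2 + x2^2 subject to x1 + x2 >= 1.
    # Lagrangian: sum_i x_i^2 + mu * (1 - sum_i x_i); optimum x_i = 0.5, mu = 1.
    x = [0.0, 0.0]  # agents' primal states
    mu = 0.0        # dual variable held by the cloud
    for t in range(num_iters):
        # agents: local primal gradient step on the Lagrangian (every step)
        x = [xi - eta * (2.0 * xi - mu) for xi in x]
        if (t + 1) % K == 0:
            # cloud: delayed dual ascent on the aggregated constraint violation
            mu = max(0.0, mu + rho * (1.0 - sum(x)))
    return x, mu

x, mu = cloud_primal_dual()
# x approaches (0.5, 0.5) and mu approaches 1.0
```

Keeping the dual computation in the cloud means agents never need global information directly; they only see the occasionally refreshed mu.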
Tailoring Gradient Methods for Differentially-Private Distributed Optimization
Decentralized optimization is gaining increased traction due to its
widespread applications in large-scale machine learning and multi-agent
systems. The same mechanism that enables its success, i.e., information sharing
among participating agents, however, also leads to the disclosure of individual
agents' private information, which is unacceptable when sensitive data are
involved. As differential privacy is becoming a de facto standard for privacy
preservation, results have recently emerged that integrate differential privacy
with distributed optimization. Although such differential-privacy based
approaches are efficient in both computation and communication, directly
incorporating differential-privacy designs into existing distributed
optimization approaches significantly compromises optimization
accuracy. In this paper, we redesign and tailor gradient methods for
differentially-private distributed optimization, proposing two
differential-privacy oriented gradient methods that can ensure both privacy and
optimality. We prove that the proposed distributed algorithms can ensure almost
sure convergence to an optimal solution under any persistent and
variance-bounded differential-privacy noise, which, to the best of our
knowledge, has not been reported before. The first algorithm is based on
static-consensus based gradient methods and only shares one variable in each
iteration. The second algorithm is based on dynamic-consensus
(gradient-tracking) based distributed optimization methods and, hence, it is
applicable to general directed interaction graph topologies. Numerical
comparisons with existing counterparts confirm the effectiveness of the
proposed approaches.
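To illustrate the privacy/accuracy tension the abstract describes, here is a naive baseline (not the paper's tailored algorithms): agents share Gaussian-noise-perturbed states over a complete graph and take diminishing gradient steps. The objective, noise mechanism, and mixing scheme are illustrative assumptions; with persistent noise this direct injection only reaches a neighborhood of the optimum, which is precisely the accuracy loss the paper's redesigned methods are built to avoid.

```python
import random

random.seed(0)

def dp_dgd(a, noise_scale, num_iters=2000):
    # Toy problem: minimize sum_i (x - a_i)^2; minimizer is mean(a).
    n = len(a)
    x = [0.0] * n
    for t in range(num_iters):
        # each agent shares a noise-perturbed state for privacy
        # (Gaussian noise stands in for a differential-privacy mechanism)
        shared = [xi + random.gauss(0.0, noise_scale) for xi in x]
        avg = sum(shared) / n  # complete-graph uniform mixing
        step = 0.5 / (t + 1)
        # gradient of f_i at the mixed estimate
        x = [avg - step * 2.0 * (avg - a[i]) for i in range(n)]
    return x

x = dp_dgd([1.0, 2.0, 6.0], noise_scale=0.001)
# with small noise the agents land near mean(a) = 3.0; larger noise_scale
# pushes the iterates further from the optimum
```

The noise injected into every shared state never vanishes, so exact convergence fails under this naive scheme; the paper's contribution is ensuring almost sure convergence despite persistent, variance-bounded privacy noise.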
On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
This paper considers stochastic subgradient mirror-descent method for solving
constrained convex minimization problems. In particular, a stochastic
subgradient mirror-descent method with weighted iterate-averaging is
investigated and its per-iterate convergence rate is analyzed. The novel part
of the approach is in the choice of weights that are used to construct the
averages. Through the use of these weighted averages, we show that the known
optimal rates can be obtained with simpler algorithms than those currently
existing in the literature. Specifically, by suitably choosing the stepsize
values, one can obtain the rate of the order O(1/t) for strongly convex
functions, and the rate O(1/√t) for general convex functions (not
necessarily differentiable). Furthermore, for the latter case, it is shown that
a stochastic subgradient mirror-descent with iterate averaging converges (along
a subsequence) to an optimal solution, almost surely, even with the stepsize of
the form 1/√t, which was not previously known. The stepsize choices
that achieve the best rates are those proposed by Paul Tseng for acceleration
of proximal gradient methods.
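A minimal sketch of the weighted iterate-averaging idea for the special case where the mirror map is the squared Euclidean norm, so mirror descent reduces to plain stochastic subgradient descent; the constraint-set projection is omitted and the strongly convex objective, noise model, and linear weights are illustrative assumptions.

```python
import random

random.seed(1)

def weighted_avg_sgd(grad, x0, mu, num_iters=2000, noise=0.1):
    # Stochastic subgradient descent on a mu-strongly convex function,
    # returning a weighted running average of the iterates.
    x = x0
    wsum = 0.0
    xbar = 0.0
    for t in range(num_iters):
        g = grad(x) + random.gauss(0.0, noise)  # stochastic subgradient
        x = x - g / (mu * (t + 1))              # stepsize of order 1/(mu t)
        w = t + 1                               # linear weights on iterates
        wsum += w
        xbar += (w / wsum) * (x - xbar)         # incremental weighted average
    return xbar

# f(x) = (x - 2)^2 is 2-strongly convex with gradient 2(x - 2)
xbar = weighted_avg_sgd(lambda x: 2.0 * (x - 2.0), x0=0.0, mu=2.0)
# the weighted average approaches the minimizer x* = 2
```

Weighting later iterates more heavily is what recovers the optimal rate without the more complicated averaging schemes the abstract alludes to.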